Search results for "Web page"
Showing 10 of 66 documents
Using MSG-SEVIRI Data to Monitor the Planet in Near Real Time
2018
The SEVIRI (Spinning Enhanced Visible and InfraRed Imager) instrument on board the MSG (Meteosat Second Generation) satellite series provides valuable data for observing our planet. We describe here the processing chain implemented at the Global Change Unit of the University of Valencia to provide information such as a vegetation index, land and sea surface temperatures, synthetic quicklooks for easy interpretation of the data, as well as fire hotspots. Vegetation index and temperature data are available for download from a dedicated portal updated every 3 hours with the most recently processed data. Additionally, a web page displays this information for a non-scientific public in near r…
The gypsy database (GyDB) of mobile genetic elements: release 2.0
2011
This article introduces the second release of the Gypsy Database of Mobile Genetic Elements (GyDB 2.0): a research project devoted to the evolutionary dynamics of viruses and transposable elements, based on their phylogenetic classification (per lineage and protein domain). The Gypsy Database (GyDB) is a long-term project that is continuously progressing and that, owing to the high molecular diversity of mobile elements, needs to be completed in several stages. GyDB 2.0 has been equipped with a wiki to allow other researchers to participate in the project. The current database stage and scope are long terminal repeat (LTR) retroelements and their relatives. GyDB 2.0 is an update based on the analys…
Creating a questionnaire for a scientific study
2016
Using questionnaires has become a permanent part of data collection in scientific studies, in the human sciences as well as other disciplines; questionnaires have been used for nearly a century. The first questionnaires were administered on paper, but nowadays e-questionnaires exist alongside them, which can be distributed by e-mail or published on a social media platform (for example, Facebook). Another frequently used method is a survey carried out on the web page of the research project itself, or of an association, company, etc. The questionnaire has been considered a proper scientific method of data collection since the 1930s. However, it was already used a little …
Learning automata-based solutions to the optimal web polling problem modelled as a nonlinear fractional knapsack problem
2011
We consider the problem of polling web pages as a strategy for monitoring the World Wide Web. The problem consists of repeatedly polling a selection of web pages so that changes occurring over time are detected. In particular, we consider the case where we are constrained to poll a maximum number of web pages per unit of time, a constraint typically dictated by the available communication bandwidth and by processing speed limitations. Since only a fraction of the web pages can be polled within a given unit of time, the issue at stake is that of determining which web pages are to be polled, and we attempt to do so in a manner that maximizes the number of ch…
Knowledge Discovery from Network Logs
2015
Modern communication networks are complex systems, and this complexity facilitates malicious behavior. Dynamic web services are vulnerable to unknown intrusions, whereas traditional cyber-security measures are based on fingerprinting. Anomaly detection differs from fingerprinting in that it finds events that deviate from baseline traffic. The anomaly detection methodology can be modelled with the knowledge discovery process. Knowledge discovery is a high-level term for the whole process of deriving actionable knowledge from databases. This article presents the theory behind this approach and showcases research that has produced network log analysis tools and methods.
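As a toy sketch of the anomaly-detection idea the abstract contrasts with fingerprinting (the function, the count-based features, and the z-score rule are illustrative assumptions, not the article's method), one can flag time buckets in a network log whose event count deviates strongly from the baseline:

```python
import statistics

def flag_anomalies(counts, threshold=2.5):
    """Return indices of time buckets whose event count lies more than
    `threshold` population standard deviations from the mean.

    A deliberately simple stand-in for the knowledge-discovery pipeline
    the abstract describes; real systems use richer features and models.
    """
    mean = statistics.fmean(counts)
    sd = statistics.pstdev(counts)
    if sd == 0:
        return []  # perfectly flat baseline: nothing stands out
    return [i for i, c in enumerate(counts) if abs(c - mean) / sd > threshold]

# Requests per minute with one suspicious spike:
print(flag_anomalies([10, 11, 9, 10, 12, 10, 11, 200, 10, 9]))  # → [7]
```

The baseline here is global; a sliding-window baseline would adapt to slow drift in normal traffic.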
DBSCAN Algorithm for Document Clustering
2019
Document clustering is the problem of automatically grouping similar documents into categories based on some similarity metric. Almost all available data, usually on the web, are unclassified, so we need powerful clustering algorithms that work with these types of data. All common search engines return a list of pages relevant to the user's query. This list needs to be generated quickly and as accurately as possible. For this type of problem, because the web pages are unclassified, we need powerful clustering algorithms. In this paper we present a clustering algorithm called DBSCAN – Density-Based Spatial Clustering of Applications with Noise – and its limitations on documents (or web pages)…
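The density-based idea behind DBSCAN can be sketched in a few lines (this is a generic textbook version on 2-D points, not the paper's document-clustering variant; for documents one would substitute a cosine distance over term vectors): points with at least `min_pts` neighbours within radius `eps` are core points, clusters grow by expanding from core points, and everything unreachable is noise.

```python
from math import dist

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN sketch: returns one cluster id per point, -1 = noise."""
    labels = [None] * len(points)          # None = not yet visited
    def neighbours(i):
        return [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]
    cid = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1                 # noise (may become a border point later)
            continue
        cid += 1                           # i is a core point: start a new cluster
        labels[i] = cid
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid            # noise point reclaimed as border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            nb = neighbours(j)
            if len(nb) >= min_pts:         # j is also a core point: keep expanding
                queue.extend(nb)
    return labels

pts = [(0, 0), (0, 1), (1, 0), (1, 1), (5, 5), (10, 10), (10, 11), (11, 10)]
print(dbscan(pts, eps=1.5, min_pts=3))    # two dense clusters plus one noise point
```

Note that, unlike k-means, the number of clusters is not specified in advance and outliers are labelled as noise rather than forced into a cluster, which is exactly why DBSCAN suits unclassified web data.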
Business Constraints Integration in a Search Engine Optimization Fuzzy Decision Support System
2017
The Internet has dramatically changed the way companies can reach their clients. Consequently, to develop their business online, companies have to drive qualified traffic to their website. For this, their website must be visible in the first pages of search engine results. Search Engine Optimization (SEO) improves the position of a website on search engine results pages. However, the practice of SEO is complex and its results are uncertain, because search engines do not communicate much about their ranking methods. Moreover, this activity takes time and requires modifications that can alter the visual appearance of the web pages. This paper proposes a…
Querying Dynamic and Context-Sensitive Metadata in Semantic Web
2005
RDF (the core Semantic Web standard) is not originally appropriate for context representation because of its initial focus on ordinary Web resources, such as web pages, files, databases, and services, whose structure and content are more or less stable. On the other hand, emerging industrial applications consider, e.g., machines, processes, personnel, and services for condition monitoring, remote diagnostics, and maintenance to be specific classes of Web resources and thus subjects for semantic annotation. Such resources are naturally dynamic, not only from the point of view of changing values for some attributes (the state of the resource), but also from the point of view of changing “…
Influence of online transparency on efficiency. Analysis of spanish NGDOs
2020
This study examines (a) whether nongovernmental development organizations (NGDOs) disseminate relevant information for their stakeholders through their web pages, information that, after being reviewed and evaluated by external organizations such as the Spanish Coordinator of Development NGOs or the Lealtad Foundation, allowed these NGDOs to obtain a seal of transparency, and (b) whether their level of transparency influences efficiency. To determine online transparency, the web pages of seal-approved NGDOs were reviewed to assess the availability of relevant information. This paper uses data envelopment analysis with an input orientation to assess efficiency. To determine the influence of online…
A Data-Driven Approach to Dynamically Learn Focused Lexicons for Recognizing Emotions in Social Network Streams
2016
Opinion Mining aims at identifying and classifying subjective information in a collection of documents. A variety of approaches exists in the literature, ranging from supervised to unsupervised learning. Currently, one of the biggest resources of opinionated texts on the Web is represented by social networks. Social networks are not only a vast collection of documents; they also represent a dynamically evolving resource, as users keep posting their own opinions. We based our work on this idea of dynamicity, building an evolving model that updates itself in real time as users submit their posts. This is done through a set of supervised techniques based on a Lexicon of…
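The evolving-lexicon idea can be sketched as follows (the class name, scoring, and update rule are illustrative assumptions, not the authors' model): word–emotion counts are updated online as labelled posts stream in, and each new post is scored against the counts accumulated so far.

```python
from collections import defaultdict

class EvolvingLexicon:
    """Toy lexicon that evolves as labelled posts arrive (a sketch, not
    the paper's supervised technique)."""

    def __init__(self):
        # word -> emotion -> how often the word appeared in posts so labelled
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, post, emotion):
        """Fold one labelled post into the lexicon."""
        for word in post.lower().split():
            self.counts[word][emotion] += 1

    def classify(self, post):
        """Score a post by summing per-word emotion counts; None if unknown."""
        scores = defaultdict(int)
        for word in post.lower().split():
            for emotion, c in self.counts.get(word, {}).items():
                scores[emotion] += c
        return max(scores, key=scores.get) if scores else None

lex = EvolvingLexicon()
lex.update("i love this song", "joy")
lex.update("i hate this delay", "anger")
print(lex.classify("love that song"))   # → joy
```

Because `update` is incremental, the model needs no retraining pass as the stream grows, which is the dynamicity the abstract emphasizes; a production system would also decay stale counts.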